1.
Front Oncol ; 14: 1370862, 2024.
Article in English | MEDLINE | ID: mdl-38601756

ABSTRACT

Introduction: The COVID-19 pandemic had collateral effects on many health systems. Cancer screening and diagnostic tests were postponed, resulting in delays in diagnosis and treatment. This study assessed the impact of the pandemic on screening, diagnostics and incidence of breast, colorectal, lung, and prostate cancer; and whether rates returned to pre-pandemic levels by December, 2021. Methods: This is a cohort study of electronic health records from the United Kingdom (UK) primary care Clinical Practice Research Datalink (CPRD) GOLD database. The study included individuals registered with CPRD GOLD between January, 2017 and December, 2021, with at least 365 days of clinical history. The study focused on screening, diagnostic tests, referrals and diagnoses of first-ever breast, colorectal, lung, and prostate cancer. Incidence rates (IR) were stratified by age, sex, and region, and incidence rate ratios (IRR) were calculated to compare rates during and after lockdown with rates before lockdown. Forecasted rates were estimated using negative binomial regression models. Results: Among 5,191,650 eligible participants, the first lockdown resulted in reduced screening and diagnostic tests for all cancers, which remained dramatically reduced across the whole observation period for almost all tests investigated. There were significant IRR reductions in breast (0.69 [95% CI: 0.63-0.74]), colorectal (0.74 [95% CI: 0.67-0.81]), and prostate (0.71 [95% CI: 0.66-0.78]) cancer diagnoses. IRR reductions for lung cancer were non-significant (0.92 [95% CI: 0.84-1.01]). Extrapolating to the entire UK population, an estimated 18,000 breast, 13,000 colorectal, 10,000 lung, and 21,000 prostate cancer diagnoses were missed from March, 2020 to December, 2021. Discussion: The UK COVID-19 lockdown had a substantial impact on cancer screening, diagnostic tests, referrals, and diagnoses. Incidence rates remained significantly lower than pre-pandemic levels for breast and prostate cancers and associated tests by December, 2021. Delays in diagnosis are likely to have adverse consequences on cancer stage, treatment initiation, mortality rates, and years of life lost. Urgent strategies are needed to identify undiagnosed cases and address the long-term implications of delayed diagnoses.
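As a rough, hypothetical illustration of the rate-ratio and forecasting approach described in this abstract (not the authors' code; the monthly counts, person-years and column names below are placeholders), expected diagnoses can be projected from pre-lockdown months with a negative binomial model and compared against observed counts:

import numpy as np
import pandas as pd
import statsmodels.api as sm

# One row per calendar month: observed diagnoses and person-years at risk (placeholders)
rng = np.random.default_rng(0)
monthly = pd.DataFrame({
    "month_index": np.arange(60),                    # Jan 2017 .. Dec 2021
    "diagnoses": rng.poisson(200, 60),
    "person_years": np.full(60, 400_000.0),
})
pre = monthly[monthly["month_index"] < 38]           # months before the first lockdown
post = monthly[monthly["month_index"] >= 38]

def irr(events_a, py_a, events_b, py_b):
    """Crude incidence rate ratio of period A vs period B with a 95% CI."""
    ratio = (events_a / py_a) / (events_b / py_b)
    se = np.sqrt(1 / events_a + 1 / events_b)        # standard error on the log scale
    lo, hi = np.exp(np.log(ratio) + np.array([-1.96, 1.96]) * se)
    return ratio, lo, hi

print("IRR (during/after vs before lockdown):",
      irr(post["diagnoses"].sum(), post["person_years"].sum(),
          pre["diagnoses"].sum(), pre["person_years"].sum()))

# Negative binomial model fitted to pre-lockdown months, used to forecast expected counts
X_pre = sm.add_constant(pre[["month_index"]])
nb = sm.GLM(pre["diagnoses"], X_pre, family=sm.families.NegativeBinomial(),
            offset=np.log(pre["person_years"])).fit()
expected = np.asarray(nb.predict(sm.add_constant(monthly[["month_index"]]),
                                 offset=np.log(monthly["person_years"])))
shortfall = np.clip(expected - monthly["diagnoses"].to_numpy(), 0, None)
missed = shortfall[monthly["month_index"].to_numpy() >= 38].sum()
print("estimated missed diagnoses since March 2020:", round(missed))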

2.
J Bone Miner Res ; 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38619297

ABSTRACT

Evidence on the comparative effectiveness of osteoporosis treatments is heterogeneous. This may be attributed to different populations and clinical practice, but also to differing methodologies ensuring comparability of treatment groups before treatment effect estimation and the amount of residual confounding by indication. This study assessed the comparability of denosumab vs oral bisphosphonate (OBP) groups using propensity score (PS) methods and negative control outcome (NCO) analysis. A total of 280 288 women aged ≥50 years initiating denosumab or OBP in 2011-2018 were included from the UK Clinical Practice Research Datalink (CPRD) and the Danish National Registries (DNR). Balance of observed covariates was assessed using the absolute standardised mean difference (ASMD) before and after PS weighting, matching, and stratification, with ASMD >0.1 indicating imbalance. Residual confounding was assessed using NCOs with ≥100 events. Hazard ratios (HRs) and 95% confidence intervals (CIs) between treatment and each NCO were estimated using Cox models. Presence of residual confounding was evaluated with two approaches: (1) >5% of NCOs with a 95% CI excluding 1; (2) >5% of NCOs with an upper CI limit <0.75 or a lower CI limit >1.3. The number of imbalanced covariates before adjustment (CPRD 22/87; DNR 18/83) decreased, with 2-11% imbalance remaining after weighting, matching or stratification. Using approach 1, residual confounding was present for all PS methods in both databases (≥8% of NCOs), except for stratification in DNR (3.8%). Using approach 2, residual confounding was present in CPRD with PS matching (5.3%) and stratification (6.4%), but not with weighting (4.3%). Within DNR, no NCOs had HR estimates with upper or lower CI limits beyond the specified bounds indicating residual confounding for any PS method. Achievement of covariate balance and determination of residual bias were dependent upon several factors, including the population under study, the PS method, the prevalence of NCOs, and the threshold indicating residual confounding.


Treatment groups in clinical practice may not be comparable because patient characteristics differ according to the need for the prescribed medication, a problem known as confounding. We assessed the comparability of two common osteoporosis treatments, denosumab and oral bisphosphonate, in 280 288 postmenopausal women using electronic health records from the UK Clinical Practice Research Datalink (CPRD) and the Danish National Registries (DNR). We evaluated comparability of recorded patient characteristics with three propensity score (PS) methods: matching, stratification, and weighting. We assessed residual confounding from unrecorded patient characteristics via negative control outcomes (NCOs), events known not to be associated with treatment, such as delirium. We found that achieving comparability of osteoporosis treatment groups depended on the study population, the PS method, and the definition of residual confounding. Weighting and stratification performed best in DNR and CPRD, respectively. Using a stricter threshold based on statistical significance for the NCOs suggested the treatment groups were not comparable, except with PS stratification in DNR. Applying clinically significant thresholds of treatment effect size showed comparability using weighting in CPRD and all PS methods in DNR. Studies should consider more than one PS method to test robustness and should identify as many NCOs as possible to give the greatest flexibility in detecting residual confounding.
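A minimal sketch of the balance diagnostic used in this study, the absolute standardised mean difference with a 0.1 threshold, on simulated data; the fitted propensity score and the inverse-probability weights below stand in for the study's weighting, matching and stratification methods, not its actual analysis:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
age = rng.normal(72, 9, 25_000)
# Older women are more likely to start denosumab in this toy example
treated = rng.binomial(1, 1 / (1 + np.exp(-(age - 72) / 10)))

def asmd(x, z, w=None):
    """Weighted absolute standardised mean difference of covariate x between groups z=1 and z=0."""
    w = np.ones_like(x) if w is None else w
    m1 = np.average(x[z == 1], weights=w[z == 1])
    m0 = np.average(x[z == 0], weights=w[z == 0])
    v1 = np.average((x[z == 1] - m1) ** 2, weights=w[z == 1])
    v0 = np.average((x[z == 0] - m0) ** 2, weights=w[z == 0])
    return abs(m1 - m0) / np.sqrt((v1 + v0) / 2)

ps = LogisticRegression().fit(age.reshape(-1, 1), treated).predict_proba(age.reshape(-1, 1))[:, 1]
iptw = np.where(treated == 1, 1 / ps, 1 / (1 - ps))    # inverse-probability-of-treatment weights

print("ASMD before weighting:", asmd(age, treated))        # > 0.1 suggests imbalance
print("ASMD after weighting: ", asmd(age, treated, iptw))  # should fall below 0.1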

4.
Nat Hum Behav ; 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38514769

ABSTRACT

Despite evidence indicating increased risk of psychiatric issues among COVID-19 survivors, questions persist about long-term mental health outcomes and the protective effect of vaccination. Using UK Biobank data, three cohorts were constructed: SARS-CoV-2 infection (n = 26,101), contemporary control with no evidence of infection (n = 380,337) and historical control predating the pandemic (n = 390,621). Compared with contemporary controls, infected participants had higher subsequent risks of incident mental health disorders at 1 year (hazard ratio (HR): 1.54, 95% CI 1.42-1.67; P = 1.70 × 10⁻²⁴; difference in incidence rate: 27.36, 95% CI 21.16-34.10 per 1,000 person-years), including psychotic, mood, anxiety, alcohol use and sleep disorders, and prescriptions for antipsychotics, antidepressants, benzodiazepines, mood stabilizers and opioids. Risks were higher for hospitalized individuals (2.17, 1.70-2.78; P = 5.80 × 10⁻¹⁰) than those not hospitalized (1.41, 1.30-1.53; P = 1.46 × 10⁻¹⁶), and were reduced in fully vaccinated people (0.97, 0.80-1.19; P = 0.799) compared with non-vaccinated or partially vaccinated individuals (1.64, 1.49-1.79; P = 4.95 × 10⁻²⁶). Breakthrough infections showed similar risk of psychiatric diagnosis (0.91, 0.78-1.07; P = 0.278) but increased prescription risk (1.42, 1.00-2.02; P = 0.053) compared with uninfected controls. Early identification and treatment of psychiatric disorders in COVID-19 survivors, especially those severely affected or unvaccinated, should be a priority in the management of long COVID. With the accumulation of breakthrough infections in the post-pandemic era, the findings highlight the need for continued optimization of strategies to foster resilience and prevent escalation of subclinical mental health symptoms to severe disorders.

5.
Article in English | MEDLINE | ID: mdl-38523562

ABSTRACT

OBJECTIVE: We studied whether the use of hydroxychloroquine (HCQ) for COVID-19 resulted in supply shortages for patients with rheumatoid arthritis (RA) and systemic lupus erythematosus (SLE). METHODS: We used US claims data (IQVIA PHARMETRICS® Plus for Academics [PHARMETRICS]) and hospital electronic records from Spain (IMASIS) to estimate monthly rates of HCQ use between January 2019 and March 2022 in the general population and in patients with RA and SLE. Methotrexate (MTX) use was estimated as a control. RESULTS: Over 13.5 million individuals (13,311,811 PHARMETRICS, 207,646 IMASIS) were included in the general population cohort. The RA and SLE cohorts included 135,259 and 39,295 patients, respectively, in PHARMETRICS. Incidence of MTX and HCQ use was stable before March 2020. In March 2020, the incidence of HCQ use increased 9-fold in PHARMETRICS and 67-fold in IMASIS, before falling again in May 2020. Usage rates of HCQ returned to pre-pandemic trends in Spain but remained high in the US, mimicking waves of COVID-19. No significant changes in HCQ use were noted among patients with RA and SLE. MTX use rates decreased during the period in which HCQ was authorized for COVID-19 treatment. CONCLUSIONS: Use of HCQ increased dramatically in the general population in both Spain and the US during March and April 2020. While Spain returned to pre-pandemic rates after the first wave, use of HCQ remained high and followed waves of COVID-19 in the US. However, we found no evidence of shortages in the use of HCQ for either RA or SLE in the US.

6.
Heart ; 110(9): 635-643, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38471729

ABSTRACT

OBJECTIVE: To study the association between COVID-19 vaccination and the risk of post-COVID-19 cardiac and thromboembolic complications. METHODS: We conducted a staggered cohort study based on national vaccination campaigns using electronic health records from the UK, Spain and Estonia. Vaccine rollout was grouped into four stages with predefined enrolment periods. Each stage included all individuals eligible for vaccination, with no previous SARS-CoV-2 infection or COVID-19 vaccine at the start date. Vaccination status was used as a time-varying exposure. Outcomes included heart failure (HF), venous thromboembolism (VTE) and arterial thrombosis/thromboembolism (ATE) recorded in four time windows after SARS-CoV-2 infection: 0-30, 31-90, 91-180 and 181-365 days. Propensity score overlap weighting and empirical calibration were used to minimise observed and unobserved confounding, respectively. Fine-Gray models estimated subdistribution hazard ratios (sHR). Random effects meta-analyses were conducted across staggered cohorts and databases. RESULTS: The study included 10.17 million vaccinated and 10.39 million unvaccinated people. Vaccination was associated with reduced risks of acute (30-day) and post-acute COVID-19 VTE, ATE and HF: for example, meta-analytic sHRs of 0.22 (95% CI 0.17 to 0.29), 0.53 (0.44 to 0.63) and 0.45 (0.38 to 0.53), respectively, for 0-30 days after SARS-CoV-2 infection, while in the 91-180 days window the sHRs were 0.53 (0.40 to 0.70), 0.72 (0.58 to 0.88) and 0.61 (0.51 to 0.73), respectively. CONCLUSIONS: COVID-19 vaccination reduced the risk of post-COVID-19 cardiac and thromboembolic outcomes. These effects were more pronounced for acute COVID-19 outcomes, consistent with known reductions in disease severity following breakthrough versus unvaccinated SARS-CoV-2 infection.
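The confounding-control step named here, propensity score overlap weighting, reduces to weighting vaccinated individuals by 1 - PS and comparators by PS. A simplified sketch on simulated data (the covariates are invented, and the Fine-Gray modelling and empirical calibration steps are not shown):

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Simulated baseline data: vaccination depends on age, so groups are confounded
rng = np.random.default_rng(1)
n = 50_000
df = pd.DataFrame({"age": rng.integers(18, 90, n),
                   "prior_conditions": rng.poisson(1.5, n)})
df["vaccinated"] = rng.binomial(1, 1 / (1 + np.exp(-(0.03 * df["age"] - 2))), n)

# Propensity score: probability of vaccination given baseline covariates
ps = (LogisticRegression(max_iter=1000)
      .fit(df[["age", "prior_conditions"]], df["vaccinated"])
      .predict_proba(df[["age", "prior_conditions"]])[:, 1])

# Overlap weights: vaccinated weighted by 1 - PS, unvaccinated by PS
df["ow"] = np.where(df["vaccinated"] == 1, 1 - ps, ps)

# Weighted covariate means should be near-identical across exposure groups
for status, grp in df.groupby("vaccinated"):
    print("vaccinated =", status, "weighted mean age:",
          round(np.average(grp["age"], weights=grp["ow"]), 2))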


Subject(s)
COVID-19 , Heart Failure , Venous Thromboembolism , Humans , COVID-19 Vaccines/adverse effects , COVID-19/epidemiology , COVID-19/prevention & control , Venous Thromboembolism/epidemiology , Venous Thromboembolism/etiology , Venous Thromboembolism/prevention & control , Cohort Studies , SARS-CoV-2 , Heart Failure/epidemiology , Vaccination
7.
BMJ Open Respir Res ; 11(1)2024 02 27.
Article in English | MEDLINE | ID: mdl-38413124

ABSTRACT

BACKGROUND: There is a lack of knowledge on how patients with asthma or chronic obstructive pulmonary disease (COPD) are globally treated in the real world, especially with regard to the initial pharmacological treatment of newly diagnosed patients and the different treatment trajectories. This knowledge is important to monitor and improve clinical practice. METHODS: This retrospective cohort study aims to characterise treatments using data from four claims (drug dispensing) and four electronic health record (EHR; drug prescriptions) databases across six countries and three continents, encompassing 1.3 million patients with asthma or COPD. We analysed treatment trajectories at drug class level from first diagnosis and visualised these in sunburst plots. RESULTS: In four countries (USA, UK, Spain and the Netherlands), most adults with asthma initiate treatment with short-acting β2 agonist monotherapy (20.8%-47.4% of first-line treatments). For COPD, the most frequent first-line treatment varies by country. The largest percentages of untreated patients (for asthma and COPD) were found in claims databases (14.5%-33.2% for asthma and 27.0%-52.2% for COPD) from the USA as compared with EHR databases (6.9%-15.2% for asthma and 4.4%-17.5% for COPD) from European countries. The treatment trajectories showed step-up as well as step-down in treatments. CONCLUSION: Real-world data from claims and EHRs indicate that first-line treatments of asthma and COPD vary widely across countries. We found evidence of a stepwise approach in the pharmacological treatment of asthma and COPD, suggesting that treatments may be tailored to patients' needs.
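A hedged sketch of how drug-class treatment trajectories might be derived from prescription records and drawn as a sunburst plot; the toy records, column names and plotting library (plotly) are illustrative rather than the study's pipeline:

import pandas as pd
import plotly.express as px

rx = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 3],
    "start_date": pd.to_datetime(
        ["2019-01-05", "2019-06-01", "2019-02-10", "2019-03-15", "2019-04-01"]),
    "drug_class": ["SABA", "ICS", "SABA", "SABA+ICS", "LAMA"],
})

# Order each patient's prescriptions and keep the first two distinct drug classes
rx = rx.sort_values(["patient_id", "start_date"])
firsts = (rx.drop_duplicates(["patient_id", "drug_class"])
            .groupby("patient_id")["drug_class"]
            .apply(lambda s: list(s[:2]) + ["none"] * (2 - len(s[:2]))))
traj = pd.DataFrame(firsts.tolist(), columns=["first_line", "second_line"])
counts = traj.value_counts().reset_index(name="n_patients")

# Inner ring = first-line class, outer ring = second-line class
fig = px.sunburst(counts, path=["first_line", "second_line"], values="n_patients")
fig.show()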


Subject(s)
Asthma , Pulmonary Disease, Chronic Obstructive , Adult , Humans , Retrospective Studies , Administration, Inhalation , Bronchodilator Agents/therapeutic use , Adrenergic beta-2 Receptor Agonists/therapeutic use , Adrenal Cortex Hormones/therapeutic use , Pulmonary Disease, Chronic Obstructive/diagnosis , Pulmonary Disease, Chronic Obstructive/drug therapy , Pulmonary Disease, Chronic Obstructive/epidemiology , Asthma/diagnosis , Asthma/drug therapy , Asthma/epidemiology
8.
Clin Epidemiol ; 16: 71-89, 2024.
Article in English | MEDLINE | ID: mdl-38357585

ABSTRACT

Purpose: Few studies have examined how the absolute risk of thromboembolism with COVID-19 has evolved over time across different countries. Researchers from the European Medicines Agency, Health Canada, and the United States (US) Food and Drug Administration established a collaboration to evaluate the absolute risk of arterial (ATE) and venous thromboembolism (VTE) in the 90 days after diagnosis of COVID-19 in the ambulatory (eg, outpatient, emergency department, nursing facility) setting from seven countries across North America (Canada, US) and Europe (England, Germany, Italy, Netherlands, and Spain) within periods before and during COVID-19 vaccine availability. Patients and Methods: We conducted cohort studies of patients initially diagnosed with COVID-19 in the ambulatory setting from the seven specified countries. Patients were followed for 90 days after COVID-19 diagnosis. The primary outcomes were ATE and VTE over 90 days from diagnosis date. We measured country-level estimates of 90-day absolute risk (with 95% confidence intervals) of ATE and VTE. Results: The seven cohorts included 1,061,565 patients initially diagnosed with COVID-19 in the ambulatory setting before COVID-19 vaccines were available (through November 2020). The 90-day absolute risk of ATE during this period ranged from 0.11% (0.09-0.13%) in Canada to 1.01% (0.97-1.05%) in the US, and the 90-day absolute risk of VTE ranged from 0.23% (0.21-0.26%) in Canada to 0.84% (0.80-0.89%) in England. The seven cohorts included 3,544,062 patients with COVID-19 during vaccine availability (beginning December 2020). The 90-day absolute risk of ATE during this period ranged from 0.06% (0.06-0.07%) in England to 1.04% (1.01-1.06%) in the US, and the 90-day absolute risk of VTE ranged from 0.25% (0.24-0.26%) in England to 1.02% (0.99-1.04%) in the US. Conclusion: There was heterogeneity by country in 90-day absolute risk of ATE and VTE after ambulatory COVID-19 diagnosis both before and during COVID-19 vaccine availability.
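The headline estimates here are 90-day absolute risks with 95% confidence intervals. A minimal sketch of that calculation as a simple binomial proportion per country (made-up counts; a full analysis would also handle censoring and competing risks):

import pandas as pd
from statsmodels.stats.proportion import proportion_confint

cohorts = pd.DataFrame({
    "country": ["Canada", "US", "England"],
    "n_patients": [120_000, 400_000, 250_000],       # ambulatory COVID-19 diagnoses (placeholders)
    "vte_events_90d": [280, 3400, 2100],             # VTE events within 90 days (placeholders)
})

risk = cohorts["vte_events_90d"] / cohorts["n_patients"]
lo, hi = proportion_confint(cohorts["vte_events_90d"], cohorts["n_patients"],
                            alpha=0.05, method="wilson")
summary = cohorts.assign(risk_pct=100 * risk, ci_low_pct=100 * lo, ci_high_pct=100 * hi)
print(summary[["country", "risk_pct", "ci_low_pct", "ci_high_pct"]])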

9.
Lancet Respir Med ; 12(3): 225-236, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38219763

ABSTRACT

BACKGROUND: Although vaccines have proved effective to prevent severe COVID-19, their effect on preventing long-term symptoms is not yet fully understood. We aimed to evaluate the overall effect of vaccination to prevent long COVID symptoms and assess comparative effectiveness of the most used vaccines (ChAdOx1 and BNT162b2). METHODS: We conducted a staggered cohort study using primary care records from the UK (Clinical Practice Research Datalink [CPRD] GOLD and AURUM), Catalonia, Spain (Information System for Research in Primary Care [SIDIAP]), and national health insurance claims from Estonia (CORIVA database). All adults who were registered for at least 180 days as of Jan 4, 2021 (the UK), Feb 20, 2021 (Spain), and Jan 28, 2021 (Estonia) comprised the source population. Vaccination status was used as a time-varying exposure, staggered by vaccine rollout period. Vaccinated people were further classified by vaccine brand according to their first dose received. The primary outcome definition of long COVID was defined as having at least one of 25 WHO-listed symptoms between 90 and 365 days after the date of a PCR-positive test or clinical diagnosis of COVID-19, with no history of that symptom 180 days before SARS-Cov-2 infection. Propensity score overlap weighting was applied separately for each cohort to minimise confounding. Sub-distribution hazard ratios (sHRs) were calculated to estimate vaccine effectiveness against long COVID, and empirically calibrated using negative control outcomes. Random effects meta-analyses across staggered cohorts were conducted to pool overall effect estimates. FINDINGS: A total of 1 618 395 (CPRD GOLD), 5 729 800 (CPRD AURUM), 2 744 821 (SIDIAP), and 77 603 (CORIVA) vaccinated people and 1 640 371 (CPRD GOLD), 5 860 564 (CPRD AURUM), 2 588 518 (SIDIAP), and 302 267 (CORIVA) unvaccinated people were included. Compared with unvaccinated people, overall HRs for long COVID symptoms in people vaccinated with a first dose of any COVID-19 vaccine were 0·54 (95% CI 0·44-0·67) in CPRD GOLD, 0·48 (0·34-0·68) in CPRD AURUM, 0·71 (0·55-0·91) in SIDIAP, and 0·59 (0·40-0·87) in CORIVA. A slightly stronger preventative effect was seen for the first dose of BNT162b2 than for ChAdOx1 (sHR 0·85 [0·60-1·20] in CPRD GOLD and 0·84 [0·74-0·94] in CPRD AURUM). INTERPRETATION: Vaccination against COVID-19 consistently reduced the risk of long COVID symptoms, which highlights the importance of vaccination to prevent persistent COVID-19 symptoms, particularly in adults. FUNDING: National Institute for Health and Care Research.


Subject(s)
COVID-19 Vaccines , COVID-19 , Adult , Humans , BNT162 Vaccine , Cohort Studies , COVID-19/epidemiology , COVID-19/prevention & control , COVID-19 Vaccines/therapeutic use , Estonia , Post-Acute COVID-19 Syndrome , SARS-CoV-2 , Spain , United Kingdom/epidemiology
10.
J Am Geriatr Soc ; 72(2): 456-466, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37905683

ABSTRACT

BACKGROUND: Non-steroidal anti-inflammatory drugs (NSAIDs) should be used with caution in adults aged 65 years and older. Their gastrointestinal adverse event risk might be further reinforced when using concomitant cholinesterase inhibitors (ChEIs). We aimed to investigate the association between NSAID and ChEI use and the risk of peptic ulcers in adults aged 65 years and older. METHODS: Register-based self-controlled case series study including adults ≥65 years with a new prescription of ChEIs and NSAIDs, diagnosed with incident peptic ulcer in Sweden, 2007-2020. We identified persons from the Total Population Register individually linked to several nationwide registers. We estimated the incidence rate ratio (IRR) of peptic ulcer with a conditional Poisson regression model for four mutually exclusive risk periods: use of ChEIs, NSAIDs, and the combination of ChEIs and NSAIDs, compared with non-treatment periods in the same individual. Risk periods were identified based on the prescribed daily dose, extracted via a text-parsing algorithm, and a 30-day grace period. RESULTS: Of 70,060 individuals initiating both ChEIs and NSAIDs, we identified 1500 persons with peptic ulcer (median age at peptic ulcer 80 years), of whom 58% were females. Compared with the non-treatment periods, the risk of peptic ulcer substantially increased for the combination of ChEIs and NSAIDs (IRR 9.0 [95% CI 6.8-11.8]), more than for NSAIDs alone (5.2 [4.4-6.0]). No increased risk was found for the use of ChEIs alone (1.0 [0.9-1.2]). DISCUSSION: We found that the risk of peptic ulcer associated with the concomitant use of NSAIDs and ChEIs was over and above the risk associated with NSAIDs alone. Our results underscore the importance of carefully considering the risk of peptic ulcers when co-prescribing NSAIDs and ChEIs to adults aged 65 years and older.
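A sketch of the self-controlled case series layout described here, approximating the conditional Poisson regression with an ordinary Poisson model that includes person fixed effects and a log person-time offset; the risk windows and event counts are simulated, not the Swedish register data:

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated person-time split into mutually exclusive risk windows
rng = np.random.default_rng(2)
daily_rate = {"none": 0.0005, "chei": 0.0005, "nsaid": 0.0025, "chei_nsaid": 0.0045}
rows = []
for person in range(200):
    for period, rate in daily_rate.items():
        days = int(rng.integers(30, 365))
        rows.append({"person": person, "period": period, "days": days,
                     "events": rng.poisson(rate * days)})
sccs = pd.DataFrame(rows)
sccs = sccs[sccs.groupby("person")["events"].transform("sum") > 0]   # cases only, as in SCCS

# Person fixed effects absorb time-invariant confounding, so only within-person
# contrasts between risk windows remain (a conditional-Poisson analogue).
fit = smf.glm("events ~ C(period, Treatment('none')) + C(person)",
              data=sccs, family=sm.families.Poisson(),
              offset=np.log(sccs["days"])).fit()
print(np.exp(fit.params.filter(like="period")))      # incidence rate ratios vs non-treatment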


Subject(s)
Cholinesterase Inhibitors , Peptic Ulcer , Female , Humans , Aged, 80 and over , Male , Anti-Inflammatory Agents, Non-Steroidal/adverse effects , Peptic Ulcer/chemically induced , Peptic Ulcer/epidemiology , Case-Control Studies , Research Design , Risk Factors
11.
Value Health ; 27(2): 173-181, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38042335

ABSTRACT

OBJECTIVES: Generalizability of trial-based cost-effectiveness estimates to real-world target populations is important for decision making. In the context of independent aggregate time-to-event baseline and relative effects data, complex hazards can make modeling of data for use in economic evaluation challenging. Our article provides an overview of methods that can be used to apply trial-derived relative treatment effects to external real-world baselines when faced with complex hazards, followed by a motivating example. METHODS: Approaches for applying trial-derived relative effects to real-world baselines are presented in the context of complex hazards. Appropriate methods are applied in a cost-effectiveness analysis using data from a previously published study assessing the real-world cost-effectiveness of a treatment for carcinoma of the head and neck as a motivating example. RESULTS: Lack of common hazards between the trial and target real-world population, a complex baseline hazard function, and nonproportional relative effects made the use of flexible models necessary to adequately estimate survival. Assuming common distributions between trial and real-world reference survival substantially affected survival and cost-effectiveness estimates. Modeling time-dependent vs proportional relative effects affected estimates to a lesser extent, dependent on assumptions used in cost-effectiveness modeling. CONCLUSIONS: Appropriately capturing reference treatment survival when attempting to generalize trial-derived relative treatment effects to real-world target populations can have important impacts on cost-effectiveness estimates. A balance between model complexity and adequacy for decision making should be considered where multiple data sources with complex hazards are being evaluated.
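A small worked example of the core idea discussed above: applying a trial-derived, possibly time-varying hazard ratio to an external real-world baseline hazard to obtain the treated survival curve, S1(t) = exp(-∫ HR(u) h0(u) du). All hazards and effects below are illustrative, not the study's inputs:

import numpy as np

t = np.linspace(0, 60, 601)                       # months
dt = np.diff(t, prepend=t[0])
h0 = 0.03 + 0.0004 * t                            # hypothetical real-world baseline hazard
hr_ph = np.full_like(t, 0.7)                      # proportional-hazards relative effect
hr_td = 0.5 + 0.006 * t                           # time-dependent (waning) relative effect

def survival(hazard):
    # S(t) = exp(-cumulative hazard), integrated numerically on the time grid
    return np.exp(-np.cumsum(hazard * dt))

s_baseline = survival(h0)
s_ph = survival(hr_ph * h0)
s_td = survival(hr_td * h0)

# Restricted mean survival (area under each curve) feeds into QALY and cost estimates
for label, s in [("reference", s_baseline), ("proportional HR", s_ph), ("time-dependent HR", s_td)]:
    print(label, round((s * dt).sum(), 1), "months")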


Subject(s)
Cost-Effectiveness Analysis , Humans , Cost-Benefit Analysis
12.
Pharmacoepidemiol Drug Saf ; 33(1): e5717, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37876360

ABSTRACT

PURPOSE: Real-world data (RWD) offers a valuable resource for generating population-level disease epidemiology metrics. We aimed to develop a well-tested and user-friendly R package to compute incidence rates and prevalence in data mapped to the Observational Medical Outcomes Partnership (OMOP) common data model (CDM). MATERIALS AND METHODS: We created IncidencePrevalence, an R package to support the analysis of population-level incidence rates and point- and period-prevalence in OMOP-formatted data. On top of unit testing, we assessed the face validity of the package. To do so, we calculated incidence rates of COVID-19 using RWD from Spain (SIDIAP) and the United Kingdom (CPRD Aurum), and replicated two previously published studies using data from the Netherlands (IPCI) and the United Kingdom (CPRD Gold). We compared the obtained results to those previously published, and measured execution times by running a benchmark analysis across databases. RESULTS: IncidencePrevalence achieved high agreement with previously published data in CPRD Gold and IPCI, and showed good performance across databases. For COVID-19, incidence calculated by the package was similar to public data after the first wave of the pandemic. CONCLUSION: For data mapped to the OMOP CDM, the IncidencePrevalence R package can support descriptive epidemiological research. It enables reliable estimation of incidence and prevalence from large real-world data sets. It represents a simple, but extendable, analytical framework to generate estimates in a reproducible and timely manner.
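IncidencePrevalence itself is an R package operating on OMOP CDM tables; the fragment below only illustrates the underlying arithmetic of an incidence rate and a point prevalence on hypothetical counts:

import numpy as np

# Incidence: new cases divided by contributed person-time in the denominator cohort
events = 1_250                        # incident diagnoses observed in the period (placeholder)
person_years = 480_000.0              # contributed observation time (placeholder)
rate = events / person_years * 100_000
se_log = 1 / np.sqrt(events)          # approximate log-scale standard error
ci = np.exp(np.log(rate) + np.array([-1.96, 1.96]) * se_log)
print(f"incidence rate: {rate:.1f} per 100,000 person-years (95% CI {ci[0]:.1f}-{ci[1]:.1f})")

# Point prevalence: people with the condition on an index date over the population that day
cases_on_date, population_on_date = 3_800, 510_000
print(f"point prevalence: {100 * cases_on_date / population_on_date:.2f}%")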


Subject(s)
COVID-19 , Data Management , Humans , Incidence , Prevalence , Databases, Factual , COVID-19/epidemiology
13.
Int J Epidemiol ; 53(1)2024 Feb 01.
Article in English | MEDLINE | ID: mdl-37833846

ABSTRACT

BACKGROUND: There are scarce data on best practices to control for confounding in observational studies assessing vaccine effectiveness to prevent COVID-19. We compared the performance of three well-established methods [overlap weighting, inverse probability treatment weighting and propensity score (PS) matching] to minimize confounding when comparing vaccinated and unvaccinated people. Subsequently, we conducted a target trial emulation to study the ability of these methods to replicate COVID-19 vaccine trials. METHODS: We included all individuals aged ≥75 years from primary care records from the UK [Clinical Practice Research Datalink (CPRD) AURUM], who were not infected with or vaccinated against SARS-CoV-2 as of 4 January 2021. Vaccination status was then defined based on first COVID-19 vaccine dose exposure between 4 January 2021 and 28 January 2021. Lasso regression was used to calculate the PS. Location, age, prior observation time, regional vaccination rates, testing effort and COVID-19 incidence rates at the index date were forced into the PS. Following PS weighting and matching, the three methods were compared for remaining covariate imbalance and residual confounding. Last, a target trial emulation was conducted, comparing COVID-19 at 3 and 12 weeks after the first vaccine dose between vaccinated and unvaccinated individuals. RESULTS: The vaccinated and unvaccinated cohorts comprised 583 813 and 332 315 individuals for weighting, respectively, and 459 000 individuals in the matched cohorts. Overlap weighting performed best in terms of minimizing confounding and systematic error. Overlap weighting successfully replicated estimates from clinical trials for vaccine effectiveness for ChAdOx1 (57%) and BNT162b2 (75%) at 12 weeks. CONCLUSION: Overlap weighting performed best in our setting. Our results based on overlap weighting replicate previous pivotal trials for the two first COVID-19 vaccines approved in Europe.
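A sketch of the propensity score workflow outlined here: lasso (L1-penalised) logistic regression for covariate selection, forced covariates retained, and overlap weights derived from the fitted score. The data are simulated, and the forced-covariate handling (refitting an unpenalised model on the selected plus forced set) is a simplification of how forcing is usually implemented:

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 20_000
X = pd.DataFrame(rng.normal(size=(n, 10)), columns=[f"x{i}" for i in range(10)])
X["age"] = rng.integers(75, 100, n)
treated = rng.binomial(1, 1 / (1 + np.exp(-(0.05 * (X["age"] - 85) + X["x0"]))), n)

forced = ["age"]                                   # covariates always kept in the PS model
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, treated)
selected = list(X.columns[np.abs(lasso.coef_[0]) > 0])
keep = sorted(set(selected) | set(forced))

ps_model = LogisticRegression(max_iter=1000).fit(X[keep], treated)
ps = ps_model.predict_proba(X[keep])[:, 1]
overlap_w = np.where(treated == 1, 1 - ps, ps)     # overlap weights for outcome analysis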


Subject(s)
COVID-19 Vaccines , COVID-19 , Humans , BNT162 Vaccine , COVID-19/epidemiology , COVID-19/prevention & control , Propensity Score , SARS-CoV-2 , Vaccine Efficacy , Aged , Aged, 80 and over
14.
Article in English | MEDLINE | ID: mdl-38082839

ABSTRACT

Risk prediction tools are increasingly popular aids in clinical decision-making. However, the underlying models are often trained on data from general patient cohorts and may not be representative of and suitable for use with targeted patient groups in actual clinical practice, such as in the case of osteoporosis patients who may be at elevated risk of mortality. We developed and internally validated a cardiovascular mortality risk prediction model tailored to individuals with osteoporosis using a range of machine learning models. We compared the performance of machine learning models with existing expert-based models with respect to data-driven risk factor identification, discrimination, and calibration. The proposed models were found to outperform existing cardiovascular mortality risk prediction tools for the osteoporosis population. External validation of the model is recommended. Clinical Relevance: This study presents the performance of machine learning models for cardiovascular death prediction among osteoporotic patients, as well as the risk factors identified by the models to be important predictors.
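A generic sketch of the evaluation described, fitting a machine-learning classifier and checking discrimination (AUROC) and calibration; the simulated data and gradient-boosting model are stand-ins, not the study's models or cohort:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

# Simulated cohort with a rare outcome (~5% mortality), for illustration only
X, y = make_classification(n_samples=20_000, n_features=20, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = GradientBoostingClassifier().fit(X_tr, y_tr)
pred = model.predict_proba(X_te)[:, 1]

print("AUROC:", roc_auc_score(y_te, pred))         # discrimination
obs, exp = calibration_curve(y_te, pred, n_bins=10)
print("calibration (predicted vs observed risk per bin):")
for o, e in zip(obs, exp):
    print(f"  predicted {e:.3f} -> observed {o:.3f}")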


Subject(s)
Cardiovascular Diseases , Osteoporosis , Humans , Risk Assessment/methods , Risk Factors , Machine Learning , Osteoporosis/complications , Osteoporosis/diagnosis , Cardiovascular Diseases/diagnosis
15.
J Am Med Inform Assoc ; 31(1): 209-219, 2023 Dec 22.
Article in English | MEDLINE | ID: mdl-37952118

ABSTRACT

OBJECTIVE: Health data standardized to a common data model (CDM) simplifies and facilitates research. This study examines the factors that make standardizing observational health data to the Observational Medical Outcomes Partnership (OMOP) CDM successful. MATERIALS AND METHODS: Twenty-five data partners (DPs) from 11 countries received funding from the European Health Data Evidence Network (EHDEN) to standardize their data. Three surveys, DataQualityDashboard results, and statistics from the conversion process were analyzed qualitatively and quantitatively. Our measures of success were the total number of days to transform source data into the OMOP CDM and participation in network research. RESULTS: The health data converted to the CDM represented more than 133 million patients. 100%, 88%, and 84% of DPs completed Surveys 1, 2, and 3, respectively. The median duration of the 6 key extract, transform, and load (ETL) processes ranged from 4 to 115 days. Of the 25 DPs, 21 were considered applicable for analysis, of which 52% standardized their data on time and 48% participated in an international collaborative study. DISCUSSION: This study shows that the consistent workflow used by EHDEN proves appropriate to support the successful standardization of observational data across Europe. Over the 25 successful transformations, we confirmed that getting the right people for the ETL is critical and that vocabulary mapping requires specific expertise and tool support. Additionally, we learned that teams that proactively prepared for data governance issues were able to avoid considerable delays, improving their ability to finish on time. CONCLUSION: This study provides guidance for future DPs standardizing to the OMOP CDM and participating in distributed networks. We demonstrate that the Observational Health Data Sciences and Informatics community must continue to evaluate and provide guidance and support for what ultimately forms the backbone of how community members generate evidence.


Subject(s)
Global Health , Medicine , Humans , Databases, Factual , Europe , Electronic Health Records
16.
Nat Commun ; 14(1): 7449, 2023 11 17.
Article in English | MEDLINE | ID: mdl-37978296

ABSTRACT

Persistent symptoms following the acute phase of COVID-19 present a major burden to both the affected and the wider community. We conducted a cohort study including over 856,840 first COVID-19 cases, 72,422 reinfections and more than 3.1 million first negative-test controls from primary care electronic health records from Spain and the UK (September 2020 to January 2022 in the UK and to March 2022 in Spain). We characterised post-acute COVID-19 symptoms and identified key symptoms associated with persistent disease. We estimated incidence rates of persisting symptoms in the general population and among COVID-19 patients over time. Subsequently, we investigated which WHO-listed symptoms were particularly differential by comparing their frequency in COVID-19 cases vs. matched test-negative controls. Lastly, we compared persistent symptoms after first infections vs reinfections. Our study shows that the proportion of COVID-19 cases affected by persistent post-acute COVID-19 symptoms declined over the study period. Risk for altered smell/taste was consistently higher in patients with COVID-19 vs test-negative controls. Persistent symptoms were more common after reinfection than following a first infection. More research is needed into the definition of long COVID, and the effect of interventions to minimise the risk and impact of persistent symptoms.


Subject(s)
COVID-19 , Post-Acute COVID-19 Syndrome , Humans , Cohort Studies , COVID-19/epidemiology , Electronic Health Records , Reinfection
17.
Drug Saf ; 46(12): 1335-1352, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37804398

ABSTRACT

INTRODUCTION: Individual case reports are the main asset in pharmacovigilance signal management. Signal validation is the first stage after signal detection and aims to determine if there is sufficient evidence to justify further assessment. Throughout signal management, a prioritization of signals is continually made. Routinely collected health data can provide relevant contextual information but are primarily used at a later stage in pharmacoepidemiological studies to assess communicated signals. OBJECTIVE: The aim of this study was to examine the feasibility and utility of analysing routine health data from a multinational distributed network to support signal validation and prioritization and to reflect on key user requirements for these analyses to become an integral part of this process. METHODS: Statistical signal detection was performed in VigiBase, the WHO global database of individual case safety reports, targeting generic manufacturer drugs and 16 prespecified adverse events. During a 5-day study-a-thon, signal validation and prioritization were performed using information from VigiBase, regulatory documents and the scientific literature alongside descriptive analyses of routine health data from 10 partners of the European Health Data and Evidence Network (EHDEN). Databases included in the study were from the UK, Spain, Norway, the Netherlands and Serbia, capturing records from primary care and/or hospitals. RESULTS: Ninety-five statistical signals were subjected to signal validation, of which eight were considered for descriptive analyses in the routine health data. Design, execution and interpretation of results from these analyses took up to a few hours for each signal (of which 15-60 minutes were for execution) and informed decisions for five out of eight signals. The impact of insights from the routine health data varied and included possible alternative explanations, potential public health and clinical impact, and the feasibility of follow-up pharmacoepidemiological studies. Three signals were selected for signal assessment; two of these decisions were supported by insights from the routine health data. Standardization of analytical code, availability of adverse event phenotypes including bridges between different source vocabularies, and governance around the access and use of routine health data were identified as important aspects for future development. CONCLUSIONS: Analyses of routine health data from a distributed network to support signal validation and prioritization are feasible within the given time limits and can inform decision making. The cost-benefit of integrating these analyses at this stage of signal management requires further research.
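The abstract does not state which disproportionality measure underpinned the statistical signal detection; as one common example, a proportional reporting ratio (PRR) on a made-up 2x2 table of report counts looks like this:

import numpy as np

a = 40       # reports with the drug of interest AND the adverse event (placeholder)
b = 1_960    # reports with the drug, other events
c = 900      # reports with other drugs AND the event
d = 497_100  # all remaining reports

prr = (a / (a + b)) / (c / (c + d))
se_log = np.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
ci = np.exp(np.log(prr) + np.array([-1.96, 1.96]) * se_log)
print(f"PRR = {prr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")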


Subject(s)
Drug-Related Side Effects and Adverse Reactions , Pharmacovigilance , Humans , Adverse Drug Reaction Reporting Systems , Drug-Related Side Effects and Adverse Reactions/epidemiology , Databases, Factual , Netherlands
18.
Br J Surg ; 110(12): 1774-1784, 2023 11 09.
Article in English | MEDLINE | ID: mdl-37758504

ABSTRACT

BACKGROUND: Hand trauma, comprising injuries to both the hand and wrist, affects over five million people per year in the NHS, resulting in 250 000 operations each year. Surgical site infection (SSI) following hand trauma surgery leads to significant morbidity. Triclosan-coated sutures may reduce SSI in major abdominal surgery but have never been tested in hand trauma. Feasibility needs to be ascertained before a definitive trial can be delivered in hand trauma. METHODS: A multicentre feasibility RCT of antimicrobial sutures versus standard sutures involving adults undergoing surgery for hand trauma to evaluate feasibility for a definitive trial. Secondary objectives were incidence of SSI in both groups, hand function measured with patient-reported outcome measures, health-related quality of life and change in employment. Randomization was performed on a 1:1 basis, stratified by age of the patient and whether the injury was open or closed, using a secure, centralized, online randomization service. Participants were blinded to allocation. RESULTS: 116 participants were recruited and randomized (60 intervention, 56 control). Of 227 screened, most were eligible (89.5 per cent), and most who were approached agreed to be included in the study (84.7 per cent). Retention was low: 57.5 per cent at 30 days, 52 per cent at 90 days and 45.1 per cent at 6 months. Incidence of SSI was >20 per cent in both groups. Hand function deteriorated after injury but recovered to near pre-injury levels during the study period. CONCLUSIONS: Risk of SSI after hand trauma is high. A definitive RCT of antimicrobial sutures in hand trauma surgery is feasible, if retention is improved. TRIAL REGISTRATION: ISRCTN10771059.


Subject(s)
Anti-Infective Agents, Local , Anti-Infective Agents , Hand Injuries , Adult , Humans , Anti-Infective Agents, Local/therapeutic use , Wrist/surgery , Quality of Life , Hawaii , Surgical Wound Infection/epidemiology , Surgical Wound Infection/prevention & control , Surgical Wound Infection/etiology , Hand Injuries/surgery
19.
BMJ Open ; 13(9): e074367, 2023 09 21.
Article in English | MEDLINE | ID: mdl-37734898

ABSTRACT

OBJECTIVES: Despite growing evidence suggesting increased COVID-19 mortality among people from ethnic minorities, little is known about milder forms of SARS-CoV-2 infection. We sought to explore the association between ethnic background and the probability of testing, testing positive, hospitalisation, COVID-19 mortality and vaccination uptake. DESIGN: A multistate cohort analysis. Participants were followed between 8 April 2020 and 30 September 2021. SETTING: The UK Biobank, which stores medical data on around half a million people who were recruited between 2006 and 2010. PARTICIPANTS: 405 541 subjects were eligible for analysis, limited to UK Biobank participants living in England. 23 891 (6%) of participants were non-white. PRIMARY AND SECONDARY OUTCOME MEASURES: The associations between ethnic background and testing, testing positive, hospitalisation and COVID-19 mortality were studied using multistate survival analyses. The association with single and double-dose vaccination was also modelled. Multistate models adjusted for age, sex and socioeconomic deprivation were fitted to estimate adjusted HRs (aHR) for each of the multistate transitions. RESULTS: 18 172 (4.5%) individuals tested positive, 3285 (0.8%) tested negative and then positive, 1490 (6.9% of those who tested positive) were hospitalised, and 129 (0.6%) tested positive at the moment of hospital admission (ie, direct hospitalisation). Finally, 662 (17.4%) died after admission. Compared with white participants, Asian participants had an increased risk of negative to positive transition (aHR 1.24 (95% CI 1.02 to 1.52)), testing positive (1.44 (95% CI 1.33 to 1.55)) and direct hospitalisation (1.61 (95% CI 1.28 to 2.03)). Black participants had an increased risk of hospitalisation following a positive test (1.71 (95% CI 1.29 to 2.27)) and direct hospitalisation (1.90 (95% CI 1.51 to 2.39)). Although not the case for Asian participants (aHR 1.00 (95% CI 0.98 to 1.02)), black participants had a reduced vaccination probability (0.63 (95% CI 0.62 to 0.65)). In contrast, Chinese participants had a reduced risk of testing negative (aHR 0.64 (95% CI 0.57 to 0.73)), of testing positive (0.40 (95% CI 0.28 to 0.57)) and of vaccination (0.78 (95% CI 0.74 to 0.83)). CONCLUSIONS: We identified inequities in testing, vaccination and COVID-19 outcomes according to ethnicity in England. Compared with white participants, Asian participants had increased risks of infection and admission, and black participants had almost double the hospitalisation risk and a 40% lower vaccine uptake.


Subject(s)
COVID-19 , Ethnicity , Humans , Biological Specimen Banks , COVID-19/epidemiology , COVID-19/prevention & control , SARS-CoV-2 , Vaccination , England/epidemiology , Morbidity
20.
JAMA Netw Open ; 6(9): e2333495, 2023 09 05.
Article in English | MEDLINE | ID: mdl-37725377

ABSTRACT

Importance: Ranitidine, the most widely used histamine-2 receptor antagonist (H2RA), was withdrawn because of N-nitrosodimethylamine impurity in 2020. Given the worldwide exposure to this drug, the potential risk of cancer development associated with the intake of known carcinogens is an important epidemiological concern. Objective: To examine the comparative risk of cancer associated with the use of ranitidine vs other H2RAs. Design, Setting, and Participants: This new-user active comparator international network cohort study was conducted using 3 health claims and 9 electronic health record databases from the US, the United Kingdom, Germany, Spain, France, South Korea, and Taiwan. Large-scale propensity score (PS) matching was used to minimize confounding by the observed covariates, and empirical calibration with negative control outcomes was performed to account for unobserved confounding. All databases were mapped to a common data model. Database-specific estimates were combined using random-effects meta-analysis. Participants included individuals aged at least 20 years with no history of cancer who used H2RAs for more than 30 days from January 1986 to December 2020, with a 1-year washout period. Data were analyzed from April to September 2021. Exposure: The main exposure was use of ranitidine vs other H2RAs (famotidine, lafutidine, nizatidine, and roxatidine). Main Outcomes and Measures: The primary outcome was incidence of any cancer, except nonmelanoma skin cancer. Secondary outcomes included all cancer except thyroid cancer, 16 cancer subtypes, and all-cause mortality. Results: Among 1 183 999 individuals in 11 databases, 909 168 individuals (mean age, 56.1 years; 507 316 [55.8%] women) were identified as new users of ranitidine, and 274 831 individuals (mean age, 58.0 years; 145 935 [53.1%] women) were identified as new users of other H2RAs. Crude incidence rates of cancer were 14.30 events per 1000 person-years (PYs) in ranitidine users and 15.03 events per 1000 PYs among other H2RA users. After PS matching, cancer risk was similar in ranitidine users compared with other H2RA users (incidence, 15.92 events per 1000 PYs vs 15.65 events per 1000 PYs; calibrated meta-analytic hazard ratio, 1.04; 95% CI, 0.97-1.12). No significant associations were found between ranitidine use and any secondary outcomes after calibration. Conclusions and Relevance: In this cohort study, ranitidine use was not associated with an increased risk of cancer compared with the use of other H2RAs. Further research is needed on the long-term association of ranitidine with cancer development.
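A sketch of the pooling step mentioned here, a DerSimonian-Laird random-effects meta-analysis of database-specific hazard ratios; the per-database estimates below are illustrative placeholders, not the study's results:

import numpy as np

hr = np.array([1.02, 0.95, 1.10, 1.01, 0.98])          # per-database hazard ratios (placeholders)
ci_low = np.array([0.90, 0.80, 0.95, 0.92, 0.85])
ci_high = np.array([1.16, 1.13, 1.27, 1.11, 1.13])

y = np.log(hr)                                          # effects on the log scale
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)    # SE recovered from the 95% CI
w = 1 / se**2                                           # inverse-variance (fixed-effect) weights

# Between-database heterogeneity (tau^2), DerSimonian-Laird estimator
q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (se**2 + tau2)                               # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
pooled_se = np.sqrt(1 / np.sum(w_re))
print(f"pooled HR: {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * pooled_se):.2f}-{np.exp(pooled + 1.96 * pooled_se):.2f})")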


Subject(s)
Skin Neoplasms , Thyroid Neoplasms , Female , Humans , Middle Aged , Male , Ranitidine/adverse effects , Cohort Studies , Histamine H2 Antagonists/adverse effects